Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects
Authors
Abstract
Theoretical accounts of the N400 are divided as to whether the amplitude of the response to a stimulus reflects the extent to which the stimulus was predicted, the extent to which it is semantically similar to its preceding context, or both. We use state-of-the-art machine learning tools to investigate which of these three accounts is best supported by the evidence. GPT-3, a neural language model trained to compute the conditional probability of any word based on the words that precede it, was used to operationalize contextual predictability. In particular, we used an information-theoretic construct known as surprisal (the negative logarithm of the probability). Contextual semantic similarity was operationalized using two high-quality co-occurrence-derived vector-based meaning representations for words: GloVe and fastText. The cosine between the vector representations of the sentence frame and the final word was used to derive similarity estimates. A series of regression models were constructed in which these variables, along with cloze probability and plausibility ratings, were used to predict single-trial N400 amplitudes recorded from healthy adults as they read sentences whose final words varied in predictability, plausibility, and semantic relationship to the likeliest completion. Statistical comparison indicated that GPT-3 surprisal provided the best account and suggested that apparently disparate N400 effects of expectancy, plausibility, and contextual semantic similarity can be reduced to variation in the predictability of words. The results are argued to support predictive coding in the human language network.
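The two predictor variables described in the abstract can be sketched in a few lines. This is a minimal illustration only: the probabilities and 3-dimensional vectors below are invented toy values, whereas the study derives probabilities from GPT-3 and uses 300-dimensional GloVe/fastText embeddings.

```python
import math

def surprisal(prob: float) -> float:
    # Surprisal is the negative logarithm of a word's conditional
    # probability; base 2 gives a value in bits.
    return -math.log2(prob)

def cosine(u: list[float], v: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy conditional probabilities for a sentence-final word (invented values):
# a highly predictable completion yields low surprisal, an unexpected one
# yields high surprisal -- the quantity regressed against N400 amplitude.
print(surprisal(0.6))    # predictable ending: low surprisal
print(surprisal(0.001))  # unexpected ending: high surprisal

# Toy 3-d vectors standing in for a sentence-frame embedding and a
# final-word embedding (invented; real GloVe/fastText vectors are 300-d).
frame = [0.2, 0.7, 0.1]
final = [0.25, 0.65, 0.15]
print(cosine(frame, final))  # contextual semantic similarity estimate
```

In the study itself, the sentence-frame vector is derived from the words preceding the final word, and both surprisal and cosine similarity enter the same regression models as competing predictors of single-trial amplitudes.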
Similar Resources
Word surprisal predicts N400 amplitude during reading
We investigated the effect of word surprisal on the EEG signal during sentence reading. On each word of 205 experimental sentences, surprisal was estimated by three types of language model: Markov models, probabilistic phrase-structure grammars, and recurrent neural networks. Four event-related potential components were extracted from the EEG of 24 readers of the same sentences. Surprisal estima...
How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects
Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language processing. However, widely-replicated prediction effects may not mean that prediction is necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language...
A Computational Model of Prediction in Human Parsing: Unifying Locality and Surprisal Effects
There is strong evidence that human sentence processing is incremental, i.e., that structures are built word by word. Recent experiments show that the processor also predicts upcoming linguistic material on the basis of previous input. We present a computational model of human parsing that is based on a variant of tree-adjoining grammar and includes an explicit mechanism for generating and veri...
N400 responses of children with primary language disorder: intervention effects.
Event-related brain potentials were examined in 6 to 8-year-old children with primary language disorder before and after a 5-week narrative-based language intervention. Participants listened to sentences ending with semantically congruous or incongruous words. By comparison with typical controls, the children with primary language disorder exhibited no pretreatment differences in their N400 res...
Question Prediction Language Model
This paper proposes the use of a language representation that specifies the relationship between terms of a sentence using question words. The proposed representation is tailored to help the search for documents containing an answer for a natural language question. This study presents the construction of this language model, the framework where it is used, and its evaluation.
Journal
Journal title: Neurobiology of Language
Year: 2023
ISSN: 2641-4368
DOI: https://doi.org/10.1162/nol_a_00105